Location Estimation Algorithm of Docent Robot in Art Gallery Using Object Detection
Mi-Hyeon Cheon and Donghwa Lee
Division of Computer and Communication Engineering, Daegu University,
201 Daegudaero, Jillyang, Gyeongsan, Gyeongbuk, Republic of Korea
Abstract
Docent robots that can replace human docents are being actively researched. However, existing docent robots require landmarks to be installed throughout art galleries and are expensive because they rely on high-priced sensors and depth cameras. To address these problems, this paper proposes an algorithm that detects specific objects and recognizes the robot's current location using a single camera instead of multiple sensors and depth cameras.
Keywords: object detection, location estimation
1. Introduction
The number of art gallery visitors is increasing with rising national income and improved standards of living. Consequently, the role of the docent is becoming more important, and the development of docent robots is accelerating. However, most existing docent robots require landmarks, magnetic guide wires, RFID tags, etc. to be installed throughout the exhibition hall so that they can recognize their current location or a specific work of art [1-2]. This study proposes an algorithm that enables docent robots to identify their current location from the exhibits already installed in the exhibition hall, instead of relying on such conventional methods.
2. Location Recognition Using Object Detection
Fig. 1 Proposed Algorithm
The whole process of the algorithm proposed in this paper is shown in Fig. 1. Considering the fact that most art galleries have white backgrounds and exhibits attached to the wall, the exhibition hall environment was virtually created for experiments.
First, the input images were converted to grayscale. A Gaussian filter was then applied and image binarization was performed to remove noise. Because most works of art are rectangular, the outlines of objects were extracted and only contours with four vertices were detected as works of art. Extracting information about the works of art from the entire input image would increase the amount of computation, so in this study the detected rectangular regions were separated so that the SURF algorithm is applied only within the area of each work. Because the camera is usually mounted near the bottom of the docent robot, the captured images look up at the exhibits from below. In this case the shapes of the exhibits are distorted and the matching accuracy of the SURF algorithm can decrease; therefore, the extracted objects were rectified to rectangular shapes by perspective warping. Because the exhibits in the gallery have known dimensions, the sizes of the works to be detected were stored in advance and compared with each detected rectangular region to identify the work it contains.
Fig. 2 Experimental Result
In this study, two images were compared using the SURF algorithm [3], which extracts robust features at high speed. The distance between the camera and the photographed work can be calculated from the ratio of the size of the detected object to the size of the actual work. Using this result, the current location of the robot was marked on the existing map. Fig. 2 shows the detected object and the location of the camera at that moment, marked by a dot on the map.
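Computing distance from the size ratio follows the standard pinhole camera model: the detected width in pixels relates to the real width through the focal length and the distance. The paper does not give its camera parameters, so the focal length and work width in this sketch are hypothetical values chosen for illustration.

```python
def estimate_distance(real_width_m, focal_length_px, detected_width_px):
    """Distance to a work from the ratio of its real size to its detected size.

    Pinhole model: detected_width_px / focal_length_px = real_width_m / distance,
    hence distance = focal_length_px * real_width_m / detected_width_px.
    """
    return focal_length_px * real_width_m / detected_width_px

# Hypothetical example: a 0.5 m wide painting imaged 200 px wide
# by a camera with an 800 px focal length:
# estimate_distance(0.5, 800, 200) -> 2.0 (metres)
```

Combined with the known wall position of the identified work, this distance places the robot on the map, as in Fig. 2.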
3. Conclusion
This paper proposed an algorithm that recognizes the location of a docent robot by detecting the installed exhibits as objects, removing the need to estimate position information through RFID tags or landmarks installed in the exhibition hall.
Acknowledgment
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Education) (NRF-2016R1D1A1B03934666).
References
[1] Bong-Woo Kwak, Soon-Gill Park, Young-Jae Ryoo, Dae-Yeong Im, and Hyun-Rok Cha, “Docent Robot using Magnetic Guidance and RFID,” Proceedings of KIIS Fall Conference, Vol. 20, No. 2, pp. 149-151, 2010.
[2] Moon-Seok Chae and Tae-Kyu Yang, “A Study on Precise Localization for Mobile Robot Based on Artificial Landmarks in the Ceiling,” Journal of Korean Institute of Information Technology, Vol. 9, No. 8, pp. 85-92, 2011.
[3] H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded Up Robust Features,” European Conference on Computer Vision, Vol. 3951, pp. 404-417, 2006.